Attack on freedom of expression? The EU now obliges digital services to combat "disinformation"

Legislation at the European level rarely attracts much attention. The Digital Services Act (DSA) is an exception. This EU regulation, which entered into force in November 2022 and has been amended repeatedly since then, aims to create uniform rules for digital services, especially for large platforms such as X, Instagram, and TikTok, whose influence on public discourse is growing steadily.
Critics see the DSA as an attack on freedom of expression online, while supporters consider it a necessary framework to limit the market power of major online services and curb abuse, for example by hostile foreign actors. Since the beginning of July, the so-called "Code of Practice on Disinformation" has been part of the DSA. What does this mean for the debate about freedom of expression online? Will it lead to more government intervention?
New Code: What is disinformation anyway?

In an interview with the Berliner Zeitung, Matthias Kettemann emphasized that the DSA has been in place for several years and that proceedings are already underway against major platform companies such as X, Meta, and TikTok for violations of the DSA. The internet researcher at the Humboldt Institute for Internet and Society in Berlin describes the goal of the DSA as "regulated self-regulation": the aim is to create a regulatory framework within which the industry can define further rules and best practices for itself.
The Code of Conduct on Disinformation is an example of such self-regulation. Its precursors were developed in 2015, formulated as a code in 2018, and adopted by the EU Commission in February of this year. The Code has been signed by Google, Facebook, Instagram, and TikTok, among others, but not by X. Digital platforms are not required to adhere to the Code, Kettemann said, as long as they can demonstrate that they ensure an "equally high level of protection."
However, the term "disinformation" itself is highly contested. According to Kettemann, disinformation has no clear legal definition, and the Code itself offers none either. Generally, disinformation is understood as "false or incomplete information shared for specific political and strategic goals," for example by foreign actors. The algorithms of major digital platforms reward emotionally charged content that generates long dwell times, which is why disinformation often spreads so widely, Kettemann continues.
False information is protected by freedom of expression

Untruths are protected by freedom of expression: it is not forbidden to publish false information. However, legislators at the European level fear that disinformation on the internet can lead to social destabilization. The DSA Code of Conduct therefore encourages digital services to use their own resources to curb the spread of disinformation.
The Code contains concrete measures against disinformation. Accounts that spread such content are to be demonetized, meaning they will no longer be able to monetize their reach. Political advertising is to be clearly labeled. The services also commit to working consistently with so-called "fact checkers," who review content and assess its veracity. Another measure is to give researchers access to data they can use to study disinformation campaigns. Compliance with the Code is to be monitored through regular reporting to the EU Commission.
Observers question, in particular, the neutrality of the "fact checkers." Critics argue that if these are firmly integrated into the platforms, they could push their own political agendas under the guise of neutrality. During the coronavirus pandemic, for example, the suggestion that the virus could have originated in a laboratory in Wuhan was labeled false, and some posts making the claim were deleted from digital platforms. Since then, various government agencies, including the German Federal Intelligence Service, have conceded that the laboratory theory is a plausible explanation for the origin of the coronavirus, perhaps the most likely one. How disinformation is to be determined in such cases remains unclear.
Media scientist: Platforms rarely block posts on content-related grounds

Internet researcher Kettemann emphasizes that in most cases digital services do not block content for what it says, but rather sanction violations of their own terms and conditions. Following a large number of court cases, the platforms have also become more cautious about taking action against individual posts based on their content. Instead, they target "coordinated inauthentic" user behavior, for example when bots are controlled centrally and spread false information under false pretenses. Kettemann therefore views the DSA not as an instrument of state censorship, but as necessary content moderation.
Author Jakob Schirrmacher, for one, takes the opposite view. Writing on the platform X, he describes the current Code as ushering in an "age of preemptive censorship" that is the "nightmare of every free society." He particularly criticizes the role of so-called "trusted flaggers" such as HateAid or the European Digital Media Observatory (EDMO) project, which can now access data from the signatory digital platforms. HateAid became known for providing legal support to left-wing politicians, such as Renate Künast of the Green Party, in lawsuits against alleged hate speech.
Schirrmacher therefore fears an increase in the prosecution of politically undesirable internet users. At the end of last year, the case of a Bavarian pensioner attracted attention: his home was searched after he shared a meme about then-Vice Chancellor Robert Habeck (Green Party). Schirrmacher writes: "Soon there will be 100 house searches per day."
Berliner Zeitung